35 research outputs found

    A block-based background model for moving object detection

    Get PDF
    Detecting the moving objects in a video sequence captured by a stationary camera is an important task for many computer vision applications. This paper proposes a background subtraction approach. As a first step, the background is initialized using block-based analysis and then updated with each incoming frame. The background frame is generated by collecting the background candidate blocks, where block candidates are selected via probability density function (pdf) computation. After that, the absolute difference between the background frame and each frame of the sequence is computed. A noise filter based on Structure/Texture decomposition is applied to minimize the noise introduced by the background subtraction operation. The binary motion mask is formed using an adaptive threshold deduced from the weighted mean and variance. To ensure correspondence between the current frame and the background frame, the background model is adapted at each incoming frame. Comparing the results of the proposed method with existing ones shows that our approach attains a higher degree of efficacy.
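    As a rough illustration of the pipeline described above (background estimation, absolute frame differencing, and an adaptive threshold derived from the mean and variance of the difference image), the following sketch substitutes a simple per-pixel median for the paper's block-based pdf selection; the function names are hypothetical:

```python
import numpy as np

def median_background(frames):
    # Simplified background initialization: per-pixel median over the
    # first frames. (The paper instead selects background candidate
    # blocks via pdf computation.)
    return np.median(np.stack(frames), axis=0)

def motion_mask(frame, background, k=2.0):
    # Absolute difference between the current frame and the background,
    # thresholded adaptively from the difference image's own statistics.
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    thr = diff.mean() + k * diff.std()  # adaptive threshold from mean/variance
    return (diff > thr).astype(np.uint8)
```

    In a full implementation the background would also be updated at each incoming frame, and the mask denoised (the paper uses Structure/Texture decomposition for this).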

    Secure eHealth: A Secure eHealth System to Detect COVID Using Image Steganography

    Get PDF
    COVID is a pandemic that has spread to all parts of the world, and detecting COVID infection is crucial to prevent further spread. Contactless health care systems, which can be implemented with cloud computing, are therefore essential. However, the privacy and security of medical image data transferred through untrusted channels cannot otherwise be ensured. The main aim is to secure the medical details when transferring them from the end device to the cloud, and vice versa, using image steganography. The medical lung images are masked under normal and natural cover images.
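    The excerpt does not specify the embedding scheme; a minimal least-significant-bit (LSB) sketch of hiding data inside a cover image, with hypothetical helper names, might look like:

```python
import numpy as np

def embed_lsb(cover, secret_bits):
    # Overwrite the least-significant bit of the first pixels with the
    # secret bit stream; the visual change per pixel is at most 1 level.
    flat = cover.flatten().copy()
    flat[:len(secret_bits)] = (flat[:len(secret_bits)] & 0xFE) | secret_bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    # Recover the hidden bits by reading back the least-significant bits.
    return stego.flatten()[:n_bits] & 1
```

    Real systems add encryption and capacity/robustness considerations on top of such a baseline.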

    Image Steganography: A Review of the Recent Advances

    Get PDF
    Image steganography is the process of hiding information, which can be text, image, or video, inside a cover image. The secret information is hidden in a way that is not visible to the human eye. Deep learning, which has emerged as a powerful tool in various applications including image steganography, has received increased attention recently. The main goal of this paper is to explore and discuss the various deep learning methods available in the image steganography field. Deep learning techniques used for image steganography can be broadly divided into three categories: traditional methods, Convolutional Neural Network-based methods, and Generative Adversarial Network-based methods. Along with the methodology, an elaborate summary of the datasets used, the experimental set-ups considered, and the commonly used evaluation metrics is given in this paper. A table summarizing all the details is also provided for easy reference. This paper aims to help fellow researchers by compiling the current trends, challenges, and some future directions in this field.

    Gait recognition for person re-identification

    Get PDF
    Person re-identification across multiple cameras is an essential task in computer vision applications, particularly for tracking the same person in different scenes. Gait recognition, i.e., recognition based on walking style, is widely used for this purpose because human gait has unique characteristics that allow a person to be recognized from a distance. However, human recognition via gait can be limited by the capture position of the images or videos. Hence, this paper proposes a gait recognition approach for person re-identification. The proposed approach first estimates the angle of the gait, and then performs the recognition process using convolutional neural networks. Herein, multitask convolutional neural network models and extracted gait energy images (GEIs) are used to estimate the angle and recognize the gait. GEIs are extracted by first detecting the moving objects using background subtraction techniques. Training and testing phases are applied to three recognized datasets: CASIA-(B), OU-ISIR, and OU-MVLP. The proposed method's background modeling is evaluated using the Scene Background Modeling and Initialization (SBI) dataset. The proposed gait recognition method showed an accuracy of more than 98% for almost all datasets. The proposed approach achieved higher accuracy than other methods for CASIA-(B) and OU-MVLP, and the best results for the OU-ISIR dataset.
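    A gait energy image is conventionally the pixel-wise average of aligned binary silhouettes over a gait cycle. A minimal sketch of that step (assuming the silhouettes have already been extracted and aligned, which the paper does via background subtraction; the function name is hypothetical):

```python
import numpy as np

def gait_energy_image(silhouettes):
    # GEI: pixel-wise mean of aligned binary silhouettes over one gait
    # cycle. Pixels near 1.0 are static body parts; intermediate values
    # encode the motion of limbs across the cycle.
    stack = np.stack([s.astype(np.float32) for s in silhouettes])
    return stack.mean(axis=0)
```

    The resulting single image then serves as the CNN input for angle estimation and recognition.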

    3D objects and scenes classification, recognition, segmentation, and reconstruction using 3D point cloud data: A review

    Full text link
    Three-dimensional (3D) point cloud analysis has become one of the attractive subjects in realistic imaging and machine vision due to its simplicity, flexibility, and powerful capacity for visualization. Indeed, the representation of scenes and buildings using 3D shapes and formats has enabled many applications, among which are autonomous driving and scene and object reconstruction. Nevertheless, working with this emerging type of data remains challenging for object representation, scene recognition, segmentation, and reconstruction. In this regard, significant effort has recently been devoted to developing novel strategies using techniques such as deep learning models. To that end, this paper presents a comprehensive review of existing tasks on 3D point clouds: a well-defined taxonomy of existing techniques is built based on the nature of the adopted algorithms, application scenarios, and main objectives. Various tasks performed on 3D point cloud data are investigated, including object and scene detection, recognition, segmentation, and reconstruction. In addition, we introduce a list of the datasets used, discuss the respective evaluation metrics, and compare the performance of existing solutions to better inform the state of the art and identify their limitations and strengths. Lastly, we elaborate on current challenges facing the subject and on future trends attracting considerable interest, which could be a starting point for upcoming research studies.

    A combined multiple action recognition and summarization for surveillance video sequences

    Get PDF
    Human action recognition and video summarization are challenging tasks for several computer vision applications, including video surveillance, criminal investigations, and sports applications. For long videos, it is difficult to search within a video for a specific action and/or person. Usually, human action recognition approaches presented in the literature deal with videos that contain only a single person whose action they can recognize. This paper proposes an effective approach to multiple human action detection, recognition, and summarization. The multiple action detection extracts human body silhouettes, then generates a specific sequence for each of them using a motion detection and tracking method. Each extracted sequence is then divided into shots that represent homogeneous actions, using the similarity between each pair of frames. Using the histogram of oriented gradients (HOG) of the Temporal Difference Map (TDMap) of the frames of each shot, the action is recognized by comparing the generated HOG against the HOGs built in the training phase, which represent the HOGs of many actions across a set of training videos. The action is also recognized from the TDMap images using a proposed CNN model. Action summarization is performed for each detected person. The efficiency of the proposed approach is demonstrated by the results obtained, mainly for multi-action detection and recognition.
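    A heavily simplified sketch of the recognition step described above: accumulate a Temporal Difference Map over a shot, describe it with an orientation histogram (a reduced stand-in for a full block-wise HOG), and match by nearest distance to training descriptors. All function names are hypothetical:

```python
import numpy as np

def tdmap(frames):
    # Temporal Difference Map: accumulated absolute differences between
    # consecutive frames of a shot.
    acc = np.zeros(frames[0].shape, np.float32)
    for a, b in zip(frames[:-1], frames[1:]):
        acc += np.abs(b.astype(np.float32) - a.astype(np.float32))
    return acc

def hog_descriptor(img, bins=9):
    # Reduced HOG: one global histogram of unsigned gradient
    # orientations, weighted by gradient magnitude, L2-normalized.
    gy, gx = np.gradient(img.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

def nearest_action(desc, gallery):
    # Match against training descriptors {action_label: descriptor}
    # by Euclidean distance.
    return min(gallery, key=lambda k: np.linalg.norm(desc - gallery[k]))
```

    A production HOG would use per-cell histograms with block normalization, and the paper additionally feeds TDMap images to a CNN.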

    Intelligent monitoring system for crowd monitoring and social distancing with mask control

    No full text
    Due to the current COVID situation, there is a huge need for crowd control as well as efficient social distancing. Security cameras are everywhere, but personnel to monitor them are few. In this project we use crowd counting and detection along with social distancing monitoring, enabling efficient social distancing and intelligent crowd control. This lightens the cumbersome task of security professionals to monitor and analyze the crowd by making smart decisions on their behalf. In addition, masks are essential instruments for preventing a corona infection and are essential for every individual in a crowd. In this project, people not wearing face masks can be detected in public places and an alert sent for that particular individual, which further helps control COVID infections. The intelligent system achieved by these two tasks enables informed decision-making, efficient remote monitoring of crowds, and proper social distancing, thus achieving safety at essential infrastructures such as transport stations, schools, malls, airports, playgrounds, and hospitals, where tracking multiple cameras at the same time would be a hassle for security professionals. In this project we propose a deep learning approach to accurately detect crowds above a certain limit and make sure individuals abide by wearing masks and social distancing effectively.
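    The abstract does not detail the distancing check itself; a common sketch flags pairs of detected person centroids that fall closer than a calibrated threshold (the function name, coordinate convention, and units are assumptions):

```python
import numpy as np
from itertools import combinations

def too_close_pairs(centroids, min_dist):
    # Flag every pair of detected people whose centroid distance is
    # below the distancing threshold. Coordinates are assumed to be
    # calibrated ground-plane positions (e.g., metres), not raw pixels.
    pts = np.asarray(centroids, np.float32)
    return [(i, j) for i, j in combinations(range(len(pts)), 2)
            if np.linalg.norm(pts[i] - pts[j]) < min_dist]
```

    In practice the centroids would come from a person detector, with a homography mapping image coordinates to the ground plane before measuring distances.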

    Face recognition and summarization for surveillance video sequences

    No full text
    Face recognition and video summarization represent challenging tasks for several computer vision applications, including video surveillance, criminal investigations, and sports applications. For long videos, it is difficult to search within a video for a specific action and/or person. Usually, human action recognition approaches presented in the literature deal with videos that contain only a single person whose action they can recognize. This paper proposes an effective approach to multiple human action detection, recognition, and summarization. The multiple action detection extracts human body silhouettes, then generates a specific sequence for each of them using a motion detection and tracking method. Each extracted sequence is then divided into shots that represent homogeneous actions, using the similarity between each pair of frames. Using the histogram of oriented gradients (HOG) of the Temporal Difference Map (TDMap) of the frames of each shot, the action is recognized by comparing the generated HOG against the HOGs built in the training phase, which represent the HOGs of many actions across a set of training videos.